Tutorial for Ground Consistency Layer #911
haider8645 wants to merge 7 commits into ros-navigation:master
Conversation
I guess the pre-commit failed job is not a problem, or is it?
Signed-off-by: haider8645 <haider_lodhi@hotmail.com>
Force pushes are because I always forget to sign off the commits...
Signed-off-by: Haider <haider_lodhi@hotmail.com>
Check the pre-commit CI issues to resolve; it shows the issues you need to fix in your file.
I think the next draft may also benefit from reading a few of the existing Nav2 tutorials. This one doesn't really step through how to set it up (i.e. add the layer to your navigation YAML config file), how to understand the parameters (here are some key parameters to tune/set up), or the other key user-facing tutorial features, I don't think. Those are the details people will read this to learn: how to set up, tune, and use it.
I think you can blow past the Gazebo world/model details, since those are just to showcase the feature. Tell them to launch Nav2 + Gazebo using this launch file (which brings up Nav2 + Gazebo + KISS-ICP + Husky), and just tell them it launches all of these things (with brief reasons about what they do) as a test/demo environment. I don't think it needs to be expanded much past that, since the point here is more related to your algorithm + the costmap layer.
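To make the review point concrete: adding a costmap layer to a Nav2 YAML config generally looks like the sketch below. The plugin name ``ground_consistency_layer``, its class string, and the ``point_cloud_topic`` parameter are assumptions for illustration, not the plugin's documented values; check the plugin's README for the real class name and parameters.

```yaml
# Hypothetical sketch of wiring a custom layer into a Nav2 costmap config.
# The class string and parameter names below are assumed, not documented values.
local_costmap:
  local_costmap:
    ros__parameters:
      plugins: ["ground_consistency_layer", "inflation_layer"]
      ground_consistency_layer:
        plugin: "nav2_ground_consistency_costmap_plugin/GroundConsistencyLayer"  # assumed class name
        enabled: true
        point_cloud_topic: "/segmented_points"  # assumed parameter/topic
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
```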
@@ -0,0 +1,363 @@
==========================================
NAV2 Ground Consistency Demo
Perhaps change to Ground Terrain Segmentation using 3D Lidar?
Also, the ==== has to match the character length of the title to render properly.
NAV2 Ground Consistency Demo
==========================================
A tutorial demonstrating terrain-aware navigation using 3D lidar ground segmentation with Nav2's `ground consistency costmap layer <https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin>`_. Learn how to classify terrain into traversable ground and obstacles, then use that classification to build smarter costmaps for safer navigation.
Suggested change:
This tutorial demonstrates terrain-aware navigation using 3D lidar ground segmentation with the `ground consistency costmap layer <https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin>`_. It shows how to classify terrain into traversable ground and obstacles, building smarter costmaps for safer navigation in non-planar outdoor environments.
Requirements
============
It is assumed you have ROS 2 Jazzy installed. To install all required dependencies and clone necessary repositories, run:
See other tutorials & link to the getting started guide if they don't have Nav2 set up/installed yet.
Then: "Then, install the required dependencies:"
.. code-block:: bash

   cd nav2_ground_consistency_demo
   bash install_dependencies.bash ~/path_to_my_custom_workspace
This needs to be updated/fixed. We should be using rosdep for any dependencies that are installed via apt. If we need to build other things from source, use .repos files.
Overall this installation should basically read:
- Install ROS 2 / Nav2 --> if not, link to the getting started pages (see other tutorials)
- Then, install the source code for the tutorial --> clone & rosdep & repos install
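The flow the review describes would look roughly like the sketch below. The workspace path and the ``dependencies.repos`` filename are assumptions for illustration; only the plugin repository URL comes from the tutorial itself.

```shell
# Sketch of the suggested install flow: clone, import source deps, rosdep, build.
# Workspace path and .repos filename are assumed, not files the tutorial ships.
mkdir -p ~/nav2_ws/src && cd ~/nav2_ws/src
git clone https://github.com/dfki-ric/nav2_ground_consistency_costmap_plugin.git
# Pull any source-only dependencies listed in a .repos file (assumed name)
vcs import < nav2_ground_consistency_costmap_plugin/dependencies.repos
cd ~/nav2_ws
# Resolve apt-installable dependencies with rosdep instead of a custom script
rosdep install --from-paths src --ignore-src -r -y
colcon build --symlink-install
```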
Ground segmentation is the first step in terrain-aware navigation. It takes raw 3D lidar point cloud data and classifies every point into two categories:

- **Ground Points**: Points that lie on the terrain surface (traversable)
- **Non-Ground Points**: Points above the terrain surface (potential obstacles)
Suggested change:
Ground segmentation is the first step in terrain-aware navigation. It takes sensor data and classifies its readings into at least two categories: ground and non-ground. This can be from raw 3D lidar point clouds as used in this tutorial, but ground segmentation may also be the result of RGBD cameras, monocular vision, or other sources using geometric or AI-based techniques.
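As a toy illustration of the ground/non-ground classification described above (not the plugin's actual algorithm), here is a naive height-threshold segmentation; real methods fit ground planes, account for slopes, or use learned models:

```python
# Toy ground segmentation: label points within a height band of the lowest
# point as "ground", everything higher as "non-ground". Illustrative only;
# this is not the algorithm used by the costmap plugin.
def segment_ground(points, height_tolerance=0.15):
    """points: list of (x, y, z) tuples; returns (ground, non_ground)."""
    if not points:
        return [], []
    ground_level = min(p[2] for p in points)
    ground = [p for p in points if p[2] - ground_level <= height_tolerance]
    non_ground = [p for p in points if p[2] - ground_level > height_tolerance]
    return ground, non_ground

cloud = [(0.0, 0.0, 0.02), (1.0, 0.5, 0.05), (1.2, 0.5, 0.90)]
ground, obstacles = segment_ground(cloud)
print(len(ground), len(obstacles))  # prints: 2 1
```

A fixed threshold like this breaks down on slopes, which is exactly why slope-aware approaches such as the ground consistency layer exist.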
Step 3: Move the Robot
----------------------

To move the robot around and see ground consistency in action:

In **Gazebo**:

1. Click the menu (⋮) in the top-right corner
2. Search for "Teleop" and select the teleop widget
3. Change the topic field to ``/model/husky/cmd_vel``
4. Use the sliders or buttons to move the robot forward/backward and rotate
Use Nav2 via the Navigate to Pose tool in RViz instead; this demo is to show how to use this layer with Nav2, so you should use Nav2 😉
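Besides the RViz tool, a goal can also be sent through Nav2's standard ``NavigateToPose`` action from the command line. The frame and coordinates below are placeholders; pick a point reachable in your map.

```shell
# Send a goal through Nav2's NavigateToPose action interface.
# The pose below is a placeholder; choose a reachable point in your map.
ros2 action send_goal /navigate_to_pose nav2_msgs/action/NavigateToPose \
  "{pose: {header: {frame_id: map}, pose: {position: {x: 2.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}}"
```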
- **On slopes**: Ground segmentation classifies the slope as traversable (not blocked)
- **Over small obstacles**: Debris below robot height doesn't appear as lethal obstacles
- **Under overpasses**: If your world has them, the robot can navigate underneath
Here's a good place to add a YouTube video or two, embedded.
KISS-ICP Odometry
-----------------

KISS-ICP is used as an odometry solution in the demo.
.. code-block:: bibtex

   @article{vizzo2023ral,
     author  = {Vizzo, Ignacio and Guadagnino, Tiziano and Mersch, Benedikt and Wiesmann, Louis and Behley, Jens and Stachniss, Cyrill},
     title   = {{KISS-ICP: In Defense of Point-to-Point ICP -- Simple, Accurate, and Robust Registration If Done the Right Way}},
     journal = {IEEE Robotics and Automation Letters (RA-L)},
     pages   = {1029--1036},
     doi     = {10.1109/LRA.2023.3236571},
     volume  = {8},
     number  = {2},
     year    = {2023},
     codeurl = {https://github.com/PRBonn/kiss-icp},
   }
Remove please; this isn't an academic paper.
Husky Robot Model
-----------------

This demo uses a local copy of the COSTAR_HUSKY_SENSOR_CONFIG_1 model from Gazebo Fuel. The local copy has been modified to enable IMU orientation output (``enable_orientation=1``) for proper quaternion-based orientation estimates, which is required for ground segmentation with IMU integration. A DiffDrive plugin was included in the robot model to execute motion commands sent to the robot.
**Original Model Citation:**

.. code-block:: bibtex

   @online{GazeboFuel-OpenRobotics-COSTAR_HUSKY_SENSOR_CONFIG_1,
     title        = {COSTAR_HUSKY_SENSOR_CONFIG_1},
     organization = {Open Robotics},
     date         = {2023},
     month        = {September},
     day          = {3},
     author       = {OpenRobotics},
     url          = {https://fuel.gazebosim.org/1.0/OpenRobotics/models/COSTAR_HUSKY_SENSOR_CONFIG_1},
   }
Basic Info
Related PR: ros-navigation/navigation2_tutorials#140
Description of contribution in a few bullet points
Performance Analysis on the demo scenario: